AI, as we use the term here, refers to systems built on neural networks capable of learning and self-configuration. Machine learning involves automating tasks by providing examples (training data) instead of writing explicit instructions in code. AI tools rely on statistics to propose the most probable output.
A dataset is a sample of data used to train neural networks. It can be quantitative or qualitative, and it always introduces bias, which remains present in the trained AI system. Websites with existing datasets include Kaggle and PapersWithCode.
The model is the trained neural network. The same neural network can be trained on different datasets, producing different models. For example, an image recognition network can be trained to recognize light, nature, faces, or objects.
A Python library is a set of useful functionalities that can be reused instead of coding them from scratch.
Access Google Colab: Google Colab API 1. Obtain the API token: Replicate API Tokens.
Team: Vania, Anna, Everardo, Sophie
This project aims to explore AI tools and suggest a solution for a current real-life problem. As a team, we were interested in topics related to body, real vs. online identity, human behaviors, and surveillance. Consequently, we decided to look into emotion detection and facial recognition models.
In the initial stages, we brainstormed various proposals where facial recognition or emotion detection could be useful. Our ideas included a lie detector, a tool to help visually impaired individuals identify emotions, an emotion collector integrated into a product, and using emotions to detect mental health issues. After careful consideration, we opted to explore the potential of AI for futuristic human romantic interactions: specifically, how a flirt detector could help people find love.
By using emotion detection to identify flirtatious behavior, we can offer people a more efficient way to tell when someone is flirting in face-to-face settings, so that they can find love more easily.
We explored Kaggle, Hugging Face, and Replicate to find models or APIs relevant to our project. Eventually, we came across a project by an Italian developer on GitHub that uses DeepFace and OpenCV to build an emotion detection model designed for computer vision.
DeepFace is a deep learning facial recognition system created by a research group at Facebook. It identifies human faces in digital images. The system employs a nine-layer neural network with over 120 million connection weights and was trained on four million images uploaded by Facebook users. It reaches roughly 97% accuracy on standard face verification benchmarks, approaching human-level performance.
OpenCV is a large open-source library for computer vision, machine learning, and image processing, and it plays a major role in real-time operations in today's AI systems.
As we dove into the GitHub model to integrate it into our project, we realized it required webcam access for real-time video. Since Google Colab runs on a remote server rather than on our own computers, we could not open our own cameras. With Pau and Marc's help, we therefore adapted the GitHub code so that the AI would detect emotions in videos we uploaded to the hosting network ourselves. We acknowledged that in a subsequent exploration, we could experiment with live video.
Coding this system on Google Colab required us to clone the GitHub repository and install both the DeepFace and OpenCV libraries.
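As a rough sketch, the setup amounted to two Colab cells like the following (the repository URL is a placeholder, not the actual repository we cloned):

```python
# Google Colab cells: clone the repository and install the libraries
!git clone https://github.com/<user>/<emotion-detection-repo>.git
!pip install deepface opencv-python
```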
We then coded the model to recognize faces and draw a frame around them in the videos. At first, the system was recognizing faces in objects, so with Marc's help, we refined the face recognition code to be more precise.
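A minimal sketch of that step, using OpenCV's bundled Haar cascade (the tuning values are illustrative, not our exact ones; raising `minNeighbors` and setting a minimum face size are typical ways to cut down false positives such as faces "found" in objects):

```python
import cv2

# OpenCV ships with pre-trained Haar cascades for frontal faces
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def draw_face_boxes(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Stricter parameters reduce false positives on non-face objects
    faces = face_cascade.detectMultiScale(
        gray, scaleFactor=1.1, minNeighbors=7, minSize=(60, 60)
    )
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    return frame
```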
Once that was done, we had to cut the videos down, as they would otherwise take too long to process: the emotion recognition system was analyzing 60 frames per second.
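Besides trimming the clips, sampling only every Nth frame is a common way to keep processing time manageable. A minimal sketch of that idea (the step size and file name are illustrative, not our exact values):

```python
import cv2

video = cv2.VideoCapture("uploaded_video.mp4")  # a video uploaded to the Colab runtime
frame_step = 15   # analyze ~4 frames per second of 60 fps footage (illustrative value)
frame_index = 0

while True:
    ok, frame = video.read()
    if not ok:
        break  # end of video
    if frame_index % frame_step == 0:
        frame = draw_face_boxes(frame)  # from the sketch above
        # ...emotion labeling would happen here (see the sketch further below)...
    frame_index += 1

video.release()
```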
The second part of the code used commands to recognize emotions in these frames. However, when we ran it, faces were being recognized and framed in the videos, but the emotions were not being labeled. This was because our model was not finding any emotions; Marc helped us fix that.
The model was an emotion recognition tool that detected a range of emotions through the DeepFace library: anger, happiness, sadness, surprise, disgust, fear, and neutral. As love was not among the available emotions, we decided to treat happiness as love for the purposes of our project and label it as such in the code. We then had the AI label emotions on the video only when the dominant emotion was happiness.
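A sketch of how that labeling logic can look with the deepface library (the function name and drawing details are our own illustration; `enforce_detection=False`, which stops `DeepFace.analyze` from raising an error on frames without a detectable face, is one common remedy for the "no emotions found" symptom, though we don't claim it was our exact fix):

```python
from deepface import DeepFace
import cv2

def label_love(frame):
    # enforce_detection=False keeps analyze() from raising on frames with no face
    results = DeepFace.analyze(frame, actions=["emotion"], enforce_detection=False)
    # Newer deepface versions return a list of results, older ones a single dict
    result = results[0] if isinstance(results, list) else results
    if result["dominant_emotion"] == "happy":
        # Relabel happiness as "love" for the purposes of our flirt detector
        region = result["region"]  # x, y, w, h of the detected face
        cv2.putText(frame, "love", (region["x"], region["y"] - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
    return frame
```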
We found a model that was already trained on a dataset of pictures from Facebook users. Had that not been the case, we could have used this dataset to train a relevant neural network ourselves. The dataset contains 35,685 examples of 48x48-pixel grayscale images of faces, divided into a training and a test set. Images are categorized by the emotion shown in the facial expression (happiness, neutral, sadness, anger, surprise, disgust, fear).
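For illustration only, a network of roughly this shape could be trained on such a dataset; we never ran this ourselves, and the architecture and hyperparameters below are assumptions rather than the model we used:

```python
import tensorflow as tf

# Hypothetical sketch: a small CNN for 48x48 grayscale images in 7 emotion classes.
# We did not train this; our model came pre-trained.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(48, 48, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(7, activation="softmax"),  # one output per emotion class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, validation_data=(test_images, test_labels))
```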
We first experimented with videos of ourselves and then downloaded videos from the internet. There is evident bias in our results, considering that happiness was relabeled as love for the purposes of our project.
It's important to note that the idea of a love detector is closer to fiction or pseudoscience than to a well-established scientific concept, although academic research on the body language of love does exist.
A love detector might be used as a novelty or entertainment device for events, parties, or amusement parks; people may find it fun to use even without any real scientific basis.

Artists or creators may use the concept of a love detector in their projects, whether in movies, literature, or interactive installations. It could be especially useful in art installations that aim to trigger reflection on our understanding of emotions, our relationship to them, and how AI might perceive them.

Companies may use the idea of a love detector in marketing campaigns to promote products or services related to relationships and romance, such as dating apps.

In a more serious context, a hypothetical love detector might be used in relationship counseling or therapy to facilitate discussions about emotions and communication.

Researchers in fields such as psychology or human-computer interaction might explore the concept for academic purposes, although such exploration would need to be grounded in rigorous scientific methodology.
More widely, an emotion detector could have more impactful uses, such as helping visually impaired individuals sense emotions if these are translated into sounds through an Arduino, for example. It could also help neurodivergent individuals better understand other people's emotions.
A love detector that involves monitoring and interpreting personal interactions may raise concerns about invasion of privacy. Individuals may not consent to having their emotions and flirting behaviors analyzed, especially in private or intimate settings. The use of such technology without clear consent could lead to violations of privacy rights.
Emotions, especially those related to love and attraction, are intricate and can vary widely among individuals. AI systems may struggle to accurately interpret the subtleties and nuances of human emotions. Misinterpretation can lead to false conclusions, causing misunderstandings and potential harm to relationships.
Flirting and expressions of affection are strongly influenced by cultural norms and practices. An AI system trained on data from one cultural context may not accurately interpret or recognize behaviors from another culture. This can introduce biases and inaccuracies in the system's assessments. Thorough training and consideration of cultural diversity are essential to mitigate this issue.
Relationships involve a myriad of emotional nuances that extend beyond observable behaviors. Human connection is complex and may involve non-verbal cues, shared experiences, and emotional intelligence that AI lacks. Relying solely on AI may oversimplify the rich and multifaceted nature of human relationships.
Introducing AI for relationship-related surveillance may inadvertently shape societal norms and expectations. Individuals may alter their behavior or interactions knowing they are being monitored, potentially leading to a distortion of natural romantic dynamics. This could impact social trust and the authenticity of human connections.
Final video on YouTube: https://youtu.be/FPSpiY-BYec?list=TLGGBUo2KBGdBqkwNzAxMjAyNA

This seminar opened my eyes to the incredible possibilities of artificial intelligence (AI). What's most surprising is realizing how accessible it is to all of us, despite the "black box" perception many people have of AI. I feel inspired to take our emotion recognition model a step further and try implementing emotion recognition using my computer's camera.
This experience has motivated me even more to explore this technology in my master's project, especially because it directly relates to my interest in online identity and surveillance. As more crucial decisions are made by AI systems, such as in recruitment or healthcare, it's essential to understand that these decisions aren't always infallible due to the limitations of the data they rely on.
Additionally, I want to delve deeper into the impact AI can have on our brains. With the increasing reliance on tools like ChatGPT to think for us, the question arises: will this give us more time to develop unique skills that AI can't replicate, or will it, instead, diminish our intellectual potential? Rethinking education will be crucial to learning how to collaborate with AI rather than simply depending on it.